Sellers ignored a commission reduction that could have earned them more money. Nothing in the product showed them the margin math. This case study traces the design from a lightweight CSV probe through to a production-ready in-product interface.
Sellers weren't joining Back Market's commission reduction programme. The incentive wasn't weak. The earning potential was invisible inside the product.
The seller-facing deals experience: from a lightweight CSV probe for validation, through a gamified deals dashboard in the back office, to a per-listing margin tradeoff tool.
300+ sellers participating. 915 self-serve CSV downloads in 3 months. A deals dashboard that made campaign progress visible inside the product for the first time.
The commission reduction campaign launched in Q3 2025 as a seller acquisition lever, a way to drive higher listing volume from both new and existing sellers by temporarily reducing our take rate. But adoption was slow. Sellers were ignoring the campaign entirely, even when it directly affected their account. The campaign messaging lived only in email and dashboard notifications. For the sellers who did see it, there was no product surface showing how much they could actually earn by participating.
Qualitative research across ~12 sellers revealed that pricing decisions follow a formula — sourcing cost, return risk, cost of business, target margin — and commission is just one variable in it. A message saying "commission reduced temporarily" didn't translate to "I should list more." Sellers needed to see the actual margin math before they'd act.
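The formula sellers described can be sketched in a few lines. Everything here is illustrative: the field names, the numbers, and the simplification that a return costs roughly the sale price are assumptions for the sketch, not Back Market's actual model.

```python
# Minimal sketch of the pricing formula sellers described in research.
# All names and numbers are illustrative assumptions, not the real model.
def net_margin(sale_price, sourcing_cost, return_rate,
               cost_of_business, commission_rate):
    """Net margin per unit after commission and expected return losses."""
    commission = sale_price * commission_rate
    # Crude simplification: a return is assumed to cost the full sale price.
    expected_return_loss = sale_price * return_rate
    return (sale_price - sourcing_cost - commission
            - expected_return_loss - cost_of_business)

# A 2-point commission cut (10% -> 8%) on a hypothetical 300 EUR listing:
before = net_margin(300, 240, 0.05, 10, 0.10)  # thin margin at 10%
after = net_margin(300, 240, 0.05, 10, 0.08)   # noticeably better at 8%
```

On numbers like these, a 2-point commission cut can more than double the per-unit margin, which is exactly the math a "commission reduced temporarily" email never surfaced.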
Each iteration solved a progressively deeper problem: visibility, then engagement, then decision confidence. The progression wasn't a scheduled improvement cycle. Each version generated the signal that defined what came next.
Campaign messaging reached sellers by email only. No product surface existed — no way to check which listings qualified, no margin context, no self-serve data. Adoption was slow, but without any in-product signal it was impossible to know whether the problem was the offer, the messaging, or the missing interface.
Hi Back Market Seller,
We have an exclusive offer for you.
From 1 November, we're reducing the commission fee on the following models from 10% to 8%:
What's next? Adjust your prices to stay competitive and take advantage of the lower rate.
This offer runs until 30 November. Standard commission rates apply to all other products per your Seller Terms and Conditions.
If you have any questions, contact the Seller Support Center.
Kind regards,
The Back Market team
There was pressure to build the full in-product experience immediately. My argument: we didn't know if the problem was the missing UI or the offer itself. The CSV was a cheap validation: qualifying listings and margin calculations in a downloadable format, added as a secondary action on a campaign banner. If nobody downloaded it, we'd know the interface wasn't the gap.
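The probe itself was deliberately simple. A plausible shape for the export, assuming hypothetical column names, SKUs, and rates, might look like this:

```python
import csv

# Hypothetical shape of the CSV probe: qualifying listings with the
# margin math precomputed. Columns, SKUs, and rates are illustrative,
# not the real export format.
listings = [
    {"sku": "IPH12-64-B", "price": 300.0,
     "current_rate": 0.10, "campaign_rate": 0.08},
    {"sku": "GALS21-128-A", "price": 420.0,
     "current_rate": 0.10, "campaign_rate": 0.08},
]

with open("deals_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["sku", "price", "current_fee",
                       "campaign_fee", "fee_saved"])
    writer.writeheader()
    for l in listings:
        writer.writerow({
            "sku": l["sku"],
            "price": l["price"],
            # Do the margin math for the seller, per listing.
            "current_fee": round(l["price"] * l["current_rate"], 2),
            "campaign_fee": round(l["price"] * l["campaign_rate"], 2),
            "fee_saved": round(
                l["price"] * (l["current_rate"] - l["campaign_rate"]), 2),
        })
```

The point of precomputing the fee columns rather than shipping raw rates: the probe tested whether sellers would engage with finished margin math, not whether they'd do the arithmetic themselves.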
The CSV confirmed sellers would engage with financial data. The next step was bringing that data into the product, but not as a static table. The research showed sellers needed two things: visibility into what deals were available and where they stood, and a sense of progress toward better commission rates. The deals dashboard introduced a dedicated surface inside the seller back office, a drawer accessible from the listings page that showed each active campaign as a card with real-time progress tracking.
The gamification was intentional but restrained: progress bars and milestone thresholds, not badges or leaderboards. The goal was to make the earning trajectory visible, not to manufacture urgency. Sellers could see exactly how many units separated them from the next commission reduction, and make their own call about whether to push for it.
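The milestone math behind the progress bar is trivial, which is part of why restraint worked. A sketch, with invented tiers (the real campaign thresholds and rates may differ):

```python
# Illustrative milestone tiers: (units sold to qualify, commission rate).
# These numbers are invented examples, not the real campaign tiers.
MILESTONES = [(0, 0.10), (50, 0.09), (150, 0.08)]

def next_milestone(units_sold):
    """Units remaining and the next rate, or None at the top tier."""
    for threshold, rate in MILESTONES:
        if units_sold < threshold:
            return threshold - units_sold, rate
    return None  # already at the best rate; nothing left to chase

remaining, rate = next_milestone(120)  # 30 units from the 8% tier
```

Everything the progress bar shows is derivable from this one lookup: distance to the next threshold, and what it's worth. No streaks, no badges, no manufactured urgency.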
The deals dashboard made campaigns visible and trackable. But "your commission drops from 9% to 8%" still didn't answer the question sellers were actually asking: is it worth adjusting my pricing for this specific listing? Each listing has a different sourcing cost, return rate, and volume expectation. A percentage improvement means something different for every item in a catalogue.
The tool calculates whether campaign participation yields better net earnings than a seller's current approach, per listing, based on actual pricing history and volume data. Two design decisions shaped how it presents that calculation. First, a table rather than individual listing views: sellers manage catalogues, not single items, and the format needed to support scanning across dozens of rows at once. Second, a three-state recommendation: Participate, Review, or nothing. "Don't participate" was deliberately excluded. The research finding from the Problem phase held here: sellers' resistance to coercion meant the interface could surface a recommendation but couldn't make the decision for them. "Review" flags the edge cases without closing them off.
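The three-state logic reduces to a small decision function. This is a sketch of the shape of that logic, not the production rule: the 5% "within noise" threshold is an assumption, and the real calculation runs on pricing history and volume data rather than two scalar inputs.

```python
# Sketch of the three-state recommendation. The 5% threshold is an
# illustrative assumption; inputs are per-listing net earnings estimates.
def recommend(current_net, campaign_net, noise_threshold=0.05):
    """Return 'Participate', 'Review', or None. Never 'Don't participate'."""
    uplift = campaign_net - current_net
    if uplift <= 0:
        # No nudge at all: the seller's resistance to coercion means the
        # interface stays silent rather than telling them not to join.
        return None
    if current_net > 0 and uplift / current_net < noise_threshold:
        return "Review"  # edge case: gain within noise, flag but don't decide
    return "Participate"
```

For example, `recommend(10.0, 12.0)` yields "Participate", `recommend(10.0, 10.2)` yields "Review", and `recommend(10.0, 9.0)` yields nothing, matching the deliberate absence of a negative recommendation.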
The programme launched with email-only outreach and no product surface. There was no way to measure whether sellers were ignoring the offer or simply couldn't evaluate it. 300+ active participants is the first adoption metric for a programme that previously had none. The CSV probe shipped to answer a single question: would sellers engage with financial data if given access? 915 downloads in three months closed that debate and justified the engineering investment in a dedicated product surface.
The most consequential finding was the 31%. Hidden inside a headline commission reduction were per-listing economics that frequently worked in sellers' favour, but only the margin tradeoff tool made that visible at the individual listing level.